# Contrastive learning
**Bloomz 560m Retriever V2** — cmarkea · OpenRAIL license · 17 downloads · 2 likes
A dual encoder based on the Bloomz-560m-dpo-chat model, designed to map articles and queries into the same vector space, supporting cross-lingual retrieval in French and English.
Tags: Text Embedding · Transformers · Multilingual

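A dual encoder like the one above retrieves by embedding the query and every article into the shared vector space and ranking by cosine similarity. A minimal stdlib-only sketch of that ranking step, using tiny stand-in vectors in place of the model's real embeddings (the function names here are illustrative, not the model's API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, doc_vecs, top_k=2):
    """Rank document embeddings by cosine similarity to the query embedding."""
    scored = sorted(
        ((cosine(query_vec, v), i) for i, v in enumerate(doc_vecs)),
        reverse=True,
    )
    return [i for _, i in scored[:top_k]]

# Stand-in 4-d embeddings; the actual encoder would emit much larger vectors.
docs = [[1.0, 0.0, 0.0, 0.0],
        [0.9, 0.1, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0]]
query = [1.0, 0.05, 0.0, 0.0]
print(retrieve(query, docs))  # → [0, 1]
```

Because both languages share one vector space, a French query can be scored against English articles with the same routine.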
**Setfit All MiniLM L6 V2 Sst2 32 Shot** — tomaarsen · Apache-2.0 license · 23 downloads · 7 likes
A SetFit model trained on the sst2 dataset for English text classification, using efficient few-shot learning techniques.
Tags: Text Classification · English

**Mmlw Retrieval Roberta Base** — sdadas · Apache-2.0 license · 408 downloads · 1 like
MMLW ("muszę mieć lepszą wiadomość", Polish for "I must get better news") is a Polish neural text encoder optimized for information retrieval, converting queries and passages into 768-dimensional vectors.
Tags: Text Embedding · Transformers · Other

**Vit B 16 SigLIP I18n 256** — timm · Apache-2.0 license · 87.92k downloads · 3 likes
A SigLIP (Sigmoid Loss for Language-Image Pre-training) model trained on the WebLI dataset, suitable for zero-shot image classification tasks.
Tags: Zero-Shot Image Classification

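The "Sigmoid Loss" in SigLIP replaces the usual softmax contrastive loss with an independent binary decision per image-text pair: matched pairs (the diagonal of the similarity matrix) get label +1, everything else -1. A stdlib-only sketch of that loss, with the temperature and bias values chosen for illustration rather than taken from the trained model:

```python
import math

def siglip_loss(sim, temperature=10.0, bias=-10.0):
    """Pairwise sigmoid loss over an image-text similarity matrix.

    sim[i][j] is the similarity of image i and text j; matched pairs
    sit on the diagonal (label +1), all other pairs are negatives (-1).
    """
    n = len(sim)
    total = 0.0
    for i in range(n):
        for j in range(n):
            label = 1.0 if i == j else -1.0
            logit = temperature * sim[i][j] + bias
            # -log sigmoid(label * logit), written stably via log1p
            total += math.log1p(math.exp(-label * logit))
    return total / n

# A well-aligned batch (high diagonal similarity) scores a lower loss
# than the same batch with its pairings shuffled.
aligned = [[1.0, 0.1], [0.1, 1.0]]
shuffled = [[0.1, 1.0], [1.0, 0.1]]
print(siglip_loss(aligned) < siglip_loss(shuffled))  # True
```

Because each pair is scored independently, the loss needs no batch-wide softmax normalization, which is what makes SigLIP training robust to batch-size changes.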
**Vit SO400M 14 SigLIP** — timm · Apache-2.0 license · 79.55k downloads · 17 likes
A SigLIP (Sigmoid Loss for Language-Image Pre-training) model trained on the WebLI dataset, suitable for zero-shot image classification tasks.
Tags: Zero-Shot Image Classification

**Minilm L6 Keyword Extraction** — valurank · Other license · 13.19k downloads · 13 likes
A sentence embedding model based on the MiniLM architecture that maps text to a 384-dimensional vector space, suitable for semantic search and clustering tasks.
Tags: Text Embedding · English

**All MiniLM L6 V2** — optimum · Apache-2.0 license · 171.02k downloads · 18 likes
A lightweight sentence embedding model based on the MiniLM architecture that maps text to a 384-dimensional vector space, suitable for semantic search and clustering tasks.
Tags: Text Embedding · English

**All Mpnet Base V2** — navteca · MIT license · 14 downloads · 1 like
A sentence embedding model based on the MPNet architecture that maps text to a 768-dimensional vector space, suitable for semantic search and sentence similarity tasks.
Tags: Text Embedding · English

**Declutr Base** — johngiorgi · Apache-2.0 license · 99 downloads · 7 likes
DeCLUTR-base is a universal sentence encoder trained via deep contrastive learning to produce high-quality text representations.
Tags: Text Embedding · English

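DeCLUTR's "deep contrastive learning" pulls the embedding of a text span toward spans from the same document and pushes it away from spans of other documents. A generic InfoNCE-style sketch of that objective in stdlib Python (not the authors' exact implementation; the function names and temperature are illustrative):

```python
import math

def info_nce(anchor, candidates, positive_idx, temperature=0.05):
    """InfoNCE loss: a softmax over cosine similarities that rewards the
    anchor for being closest to its positive candidate."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    logits = [cos(anchor, c) / temperature for c in candidates]
    m = max(logits)  # shift for numerical stability of the log-sum-exp
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - logits[positive_idx]

anchor = [1.0, 0.0]
candidates = [[0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]]  # index 0 is the positive
print(info_nce(anchor, candidates, positive_idx=0))  # near zero: well matched
```

Minimizing this loss over many (anchor, positive, negatives) triples is what shapes the embedding space that the two DeCLUTR checkpoints below expose.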
**Declutr Small** — johngiorgi · Apache-2.0 license · 56 downloads · 3 likes
DeCLUTR-small is a general-purpose sentence encoder based on deep contrastive learning, producing high-quality sentence embeddings.
Tags: Text Embedding · English

**Umlsbert ENG** — GanjinZero · Apache-2.0 license · 3,400 downloads · 13 likes
CODER is a knowledge-infused cross-lingual medical terminology embedding model focused on medical terminology normalization tasks.
Tags: Knowledge Graph · Transformers · English

**Coder All** — GanjinZero · Apache-2.0 license · 20 downloads · 3 likes
CODER is a knowledge-infused cross-lingual medical terminology embedding model for medical terminology normalization.
Tags: Knowledge Graph · Transformers · English
